4 results for lighting

in University of Queensland eSpace - Australia


Relevance: 10.00%

Abstract:

The cone photoreceptors of many vertebrates contain spherical organelles called oil droplets. In birds, turtles, lizards and some lungfish the oil droplets are heavily pigmented and function to filter the spectrum of light incident upon the visual pigment within the outer segment. Pigmented oil droplets are beneficial for colour discrimination in bright light, but at lower light levels the reduction in sensitivity caused by the pigmentation increasingly outweighs the benefits generated by spectral tuning. Consequently, it is expected that species with pigmented oil droplets should modulate the density of pigment in response to ambient light intensity and thereby regulate the amount of light transmitted to the outer segment. In this study, microspectrophotometry was used to measure the absorption spectra of cone oil droplets in chickens (Gallus gallus domesticus) reared under bright (unfiltered) or dim (filtered) sunlight. Oil droplet pigmentation was found to be dependent on the intensity of the ambient light and the duration of exposure to the different lighting treatments. In adult chickens reared in bright light, the oil droplets of all cone types (except the violet-sensitive single cones, whose oil droplet is always non-pigmented) were more densely pigmented than those in chickens reared in dim light. Calculations show that the reduced levels of oil droplet pigmentation in chickens reared in dim light would increase the sensitivity and spectral bandwidth of the outer segment significantly. The density of pigmentation in the oil droplets presumably represents a trade-off between the need for good colour discrimination and absolute sensitivity. This might also explain why nocturnal animals, or those that underwent a nocturnal phase during their evolution, have evolved oil droplets with low pigment densities or no pigmentation or have lost their oil droplets altogether.
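The claim that lower pigment density increases outer-segment sensitivity follows from the Beer-Lambert relation between a filter's optical density and the fraction of light it transmits. The sketch below is purely illustrative: the optical-density values are hypothetical, not measurements from the study.

```python
import math

def transmittance(optical_density: float) -> float:
    """Beer-Lambert: fraction of light passing a filter of the given optical density."""
    return 10 ** (-optical_density)

# Hypothetical peak optical densities for a cone oil droplet
# (illustrative values only, not data from the chicken experiment).
bright_reared_density = 2.0   # densely pigmented droplet
dim_reared_density = 0.5      # weakly pigmented droplet

for label, d in [("bright-reared", bright_reared_density),
                 ("dim-reared", dim_reared_density)]:
    print(f"{label}: optical density {d:.1f} -> "
          f"{transmittance(d) * 100:.1f}% transmitted at peak absorbance")
```

Because transmittance is exponential in density, even a modest reduction in pigmentation produces a large gain in light reaching the visual pigment, which is the trade-off against colour discrimination that the abstract describes.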

Relevance: 10.00%

Abstract:

Little is known about the quality of the images transmitted in email telemedicine systems. The present study was designed to survey the quality of images transmitted in the Swinfen Charitable Trust email referral system. Telemedicine cases were examined for a 3 month period in 2002 and a 3 month period in 2006. The number of cases with images attached increased from 8 (38%) to 37 (53%). There were four types of images (clinical photographs, microscope pictures, notes and X-ray images) and the proportion of radiology images increased from 27% to 48%. The cases in 2002 came from four different hospitals and were associated with seven different clinical specialties. In 2006, the cases came from 19 different hospitals and 20 different specialties. The 46 cases (from both study periods) had a total of 159 attached images. The quality of the images was assessed by awarding each image a score in four categories: focus, anatomical perspective, composition and lighting. The images were scored on a five-point scale (1 = very poor to 5 = very good) by a qualified medical photographer. In comparing image quality between the two study periods, there was some evidence that the quality had declined, although the average size of the attached images had increased. The median score for all images in 2002 was 16 (interquartile range 14-19) and the median score in 2006 was 15 (13-16). The difference was significant (P < 0.001, Mann-Whitney test).
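The scoring scheme implies a total per-image score of 4-20 (four categories, each rated 1-5), summarised by a median and interquartile range. A minimal sketch of that summary, using hypothetical per-image scores rather than the study's data:

```python
import statistics

def total_score(focus, perspective, composition, lighting):
    """Sum the four 1-5 category scores into a 4-20 total, as in the study's scheme."""
    for s in (focus, perspective, composition, lighting):
        assert 1 <= s <= 5, "each category uses a five-point scale"
    return focus + perspective + composition + lighting

# Hypothetical scores for four images (illustrative only).
totals = [total_score(4, 4, 4, 4), total_score(5, 4, 5, 5),
          total_score(3, 4, 4, 3), total_score(4, 5, 5, 5)]

median = statistics.median(totals)
q1, _, q3 = statistics.quantiles(totals, n=4)  # quartiles -> interquartile range
print(f"median {median}, interquartile range {q1}-{q3}")
```

The between-year comparison in the study used a Mann-Whitney test, which is appropriate here because these ordinal totals are not normally distributed.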

Relevance: 10.00%

Abstract:

Most face recognition systems only work well under quite constrained environments. In particular, the illumination conditions, facial expressions and head pose must be tightly controlled for good recognition performance. In 2004, we proposed a new face recognition algorithm, Adaptive Principal Component Analysis (APCA) [4], which performs well against both lighting variation and expression change. But like other eigenface-derived face recognition algorithms, APCA only performs well with frontal face images. The work presented in this paper is an extension of our previous work to also accommodate variations in head pose. Following the approach of Cootes et al., we develop a face model and a rotation model which can be used to interpret facial features and synthesize realistic frontal face images when given a single novel face image. We use a Viola-Jones based face detector to detect the face in real-time and thus solve the initialization problem for our Active Appearance Model search. Experiments show that our approach can achieve good recognition rates on face images across a wide range of head poses. Indeed, recognition rates are improved by up to a factor of 5 compared to standard PCA.
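The standard-PCA baseline the abstract compares against works by projecting face images onto a small set of principal components ("eigenfaces") and matching in that reduced space. The sketch below shows that baseline only, on synthetic vectors, not the APCA algorithm or the pose-normalisation pipeline the paper proposes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "face" vectors: 6 training images of 50 pixels each
# (synthetic data; a real system would use aligned frontal face crops).
train = rng.normal(size=(6, 50))
labels = ["alice", "alice", "bob", "bob", "carol", "carol"]

# Standard PCA (eigenfaces): centre the data, keep the top components.
mean_face = train.mean(axis=0)
centred = train - mean_face
# SVD of the centred data matrix yields the principal axes in Vt.
_, _, Vt = np.linalg.svd(centred, full_matrices=False)
components = Vt[:3]                  # keep 3 eigenfaces
train_proj = centred @ components.T  # training images in face space

def identify(image: np.ndarray) -> str:
    """Project a probe image into face space and return the nearest training label."""
    proj = (image - mean_face) @ components.T
    dists = np.linalg.norm(train_proj - proj, axis=1)
    return labels[int(np.argmin(dists))]

probe = train[2] + rng.normal(scale=0.01, size=50)  # near bob's first image
print(identify(probe))
```

This nearest-neighbour-in-eigenspace scheme is exactly what degrades under pose change, since a rotated face projects far from its frontal training images; synthesising a frontal view first, as the paper does, restores the match.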